Add DedupingBucketRateLimiter #6025
Merged
+1,127 −7
Why are the changes needed?

Our use of a BucketRateLimiter with a DelayingQueue is problematic when the same items are repeatedly inserted. The rate limiter pushes its token time further out with every insertion (it does not dedupe), while the delaying queue dedupes insertions. The net effect is that under load, newly inserted items are assigned token times far in the future while the queue happily drains to empty, processing items at a much lower rate than expected. A sketch of the failure mode follows.
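For illustration only, here is a minimal sketch of the mismatch using client-go's `workqueue.BucketRateLimiter`; the limiter parameters and item name are made up, and the exact API surface varies by client-go version:

```go
package main

import (
	"fmt"

	"golang.org/x/time/rate"
	"k8s.io/client-go/util/workqueue"
)

func main() {
	// Token bucket shared across all items: 1 token/sec, burst of 1.
	limiter := &workqueue.BucketRateLimiter{
		Limiter: rate.NewLimiter(rate.Limit(1), 1),
	}

	// Re-inserting the same item keeps consuming reservations, so the
	// returned delay grows (~0s, 1s, 2s, ...), even though a DelayingQueue
	// would dedupe the item and hold only a single pending copy of it.
	for i := 0; i < 5; i++ {
		fmt.Printf("reservation %d delay: %v\n", i, limiter.When("same-item"))
	}
}
```

Each `When` call takes a fresh token-bucket reservation regardless of the item, which is exactly the behavior that inflates token times under repeated inserts of the same item.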
What changes were proposed in this pull request?

This PR adds a new DedupingBucketRateLimiter that maintains historic reservations per item and skips taking out a new reservation when a duplicate item arrives with an outstanding reservation. Reservations are dropped from the map when Forget is called, which happens after the item is fully processed. This mirrors how the exponential backoff rate limiter clears its per-item state.
How was this patch tested?

Ran the new rate limiter under high execution load and verified that the queue stays full without overflowing.